    Robotic Burst Imaging for Light-Constrained 3D Reconstruction

    This thesis proposes a novel input scheme, the robotic burst, to improve vision-based 3D reconstruction for robots operating in low-light conditions, where existing state-of-the-art robotic vision algorithms struggle due to the low signal-to-noise ratio of low-light images. We aim to improve the correspondence search stage of feature-based reconstruction using robotic burst imaging, through burst-merged images, a burst feature finder, and an end-to-end learning-based feature extractor. First, we establish the use of robotic burst imaging to compute burst-merged images for feature-based reconstruction. We then develop a burst feature finder that locates features with well-defined scale and apparent motion within a burst, addressing limitations of burst-merged images such as misalignment under strong noise. To improve feature matches in burst-based reconstruction, we also present an end-to-end learning-based feature extractor that finds well-defined scale features directly on light-constrained bursts. We evaluate our methods against state-of-the-art reconstruction methods for conventional imaging that use both classical and learning-based feature extractors, and validate our input scheme on burst imagery captured with a robotic arm and drones. Our burst-based methods yield progressive improvements in low-light reconstruction over conventional approaches and, overall, converge on 90% of scenes captured in millilux conditions, compared with a 10% success rate for conventional methods. This work opens new avenues for applications including autonomous driving and drone delivery at night, mining, and behavioral studies of nocturnal animals.
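    The abstract above builds on burst-merged images, i.e. aligning and averaging a burst of noisy frames to raise the signal-to-noise ratio before feature extraction. The snippet below is a minimal, hypothetical sketch of that idea only: translation-only, integer-pixel alignment via FFT cross-correlation in NumPy. The function names, the toy scene, and the noise level are assumptions for illustration; the thesis's actual merging pipeline is not reproduced here.

```python
import numpy as np

def estimate_shift(ref, frame):
    """Estimate the integer-pixel translation of `frame` relative to `ref`
    via FFT cross-correlation (a minimal stand-in for robust burst alignment)."""
    corr = np.fft.ifft2(np.fft.fft2(frame) * np.conj(np.fft.fft2(ref))).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = ref.shape
    # Map wrapped peak positions to signed shifts.
    dy = dy - h if dy > h // 2 else dy
    dx = dx - w if dx > w // 2 else dx
    return dy, dx

def merge_burst(burst):
    """Align every frame of a noisy burst to the first frame and average,
    trading apparent motion for a higher signal-to-noise ratio."""
    ref = burst[0].astype(np.float64)
    acc = np.zeros_like(ref)
    for frame in burst:
        dy, dx = estimate_shift(ref, frame.astype(np.float64))
        acc += np.roll(frame.astype(np.float64), (-dy, -dx), axis=(0, 1))
    return acc / len(burst)

# Toy usage: a burst of shifted, noisy copies of a single synthetic scene.
rng = np.random.default_rng(0)
scene = rng.random((128, 128))
burst = np.stack([np.roll(scene, (k, 2 * k), axis=(0, 1))
                  + 0.3 * rng.standard_normal((128, 128)) for k in range(5)])
merged = merge_burst(burst)
print("per-frame noise std ~0.3, merged residual std:",
      float(np.std(merged - scene)))
```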

    BuFF: Burst Feature Finder for Light-Constrained 3D Reconstruction

    Robots operating at night with conventional vision cameras face significant challenges in reconstruction due to noise-limited images. Previous work has demonstrated that burst-imaging techniques can partially overcome this issue. In this paper, we develop a novel feature detector that operates directly on image bursts and enhances vision-based reconstruction under extremely low-light conditions. Our approach finds keypoints with well-defined scale and apparent motion within each burst by jointly searching a multi-scale and multi-motion space. Because we describe these features at a stage where the images have a higher signal-to-noise ratio, the detected features are more accurate than state-of-the-art detections on conventional noisy images and burst-merged images, and exhibit high precision, recall, and matching performance. We show improved feature performance and camera pose estimates, and demonstrate improved structure-from-motion performance using our feature detector in challenging light-constrained scenes. Our feature finder is a significant step towards robots operating in low-light scenarios and applications including night-time operations.
    Comment: 7 pages, 9 figures, 2 tables; for the associated project page, see https://roboticimaging.org/Projects/BuFF
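    To make the joint multi-scale and multi-motion search described above concrete, here is a toy sketch (not the authors' released BuFF implementation): for each candidate constant apparent motion, the burst is motion-compensated and merged, a difference-of-Gaussian stack is built on the merged image, and keypoints are kept where the response is a local maximum across space, scale, and motion. The candidate motions, scales, and threshold are hypothetical parameters.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, maximum_filter, shift as nd_shift

def buff_like_keypoints(burst, motions, sigmas, thresh=0.02):
    """Toy joint multi-scale / multi-motion keypoint search on a burst."""
    T = len(burst)
    responses = []
    for (vy, vx) in motions:
        # Undo the assumed constant per-frame motion, then merge the burst.
        merged = np.mean(
            [nd_shift(burst[t].astype(np.float64), (-vy * t, -vx * t), order=1)
             for t in range(T)], axis=0)
        # Difference-of-Gaussian responses across candidate scales.
        blurs = [gaussian_filter(merged, s) for s in sigmas]
        dog = np.stack([blurs[i + 1] - blurs[i] for i in range(len(sigmas) - 1)])
        responses.append(dog)                      # (scales-1, H, W)
    R = np.abs(np.stack(responses))                # (motions, scales-1, H, W)
    # Keep points that are maxima over a 3x3 window and over all scales/motions.
    local_max = maximum_filter(R, size=(len(motions), R.shape[1], 3, 3))
    m_idx, s_idx, ys, xs = np.nonzero((R == local_max) & (R > thresh))
    return list(zip(ys, xs, s_idx, m_idx))         # (y, x, scale idx, motion idx)
```

    The point of the joint search, as the abstract suggests, is that the detector never commits to a single pre-merged image: each keypoint carries the scale and apparent motion at which the motion-compensated burst responds most strongly.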

    LOW COST COLLISION AVOIDANCE SYSTEM ON HOLONOMIC AND NON-HOLONOMIC MOBILE ROBOTS

    Technological advancement in industry necessitates the development of autonomous mobile robots with obstacle avoidance for applications in industrial, hospital, and educational environments. Holonomic motion allows a mobile robot to move instantaneously in any direction without constraint, while non-holonomic motion restricts movement to a limited set of directions. This study compares obstacle avoidance performance using low-cost sensors on holonomic and non-holonomic mobile robots. The robot prototypes are developed and implemented with a stable control system for better mobility. Experimental results show that the omnidirectional mobile robots avoid collisions in limited space and in less time, while the non-holonomic mobile platforms exhibit reduced computational complexity and more efficient implementation. The developed mobile robots can be used for applications with diverse task specifications.
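    The holonomic versus non-holonomic distinction in the abstract comes down to the velocity command each base accepts: an omnidirectional robot can be commanded with (vx, vy, ω), while a differential-drive robot only accepts (v, ω). The sketch below illustrates that difference with a simple reactive avoidance rule over a ring of range readings; the gains, speeds, safe distance, and sensor layout are all hypothetical, and this is not the controller described in the paper.

```python
import numpy as np

def avoid_holonomic(ranges, bearings, safe=0.5):
    """Omnidirectional base: push the translational velocity directly away
    from close readings, no heading constraint. Returns (vx, vy, omega)."""
    vx, vy = 0.3, 0.0                      # nominal forward speed (m/s), assumed
    for r, b in zip(ranges, bearings):
        if r < safe:
            vx -= 0.2 * np.cos(b) * (safe - r) / safe
            vy -= 0.2 * np.sin(b) * (safe - r) / safe
    return vx, vy, 0.0                     # sideways motion is allowed

def avoid_nonholonomic(ranges, bearings, safe=0.5):
    """Differential-drive base: only forward speed and turn rate are available,
    so avoidance becomes slow-and-turn toward the freer side. Returns (v, omega)."""
    front = min((r for r, b in zip(ranges, bearings) if abs(b) < np.pi / 4),
                default=np.inf)
    if front < safe:
        left = min((r for r, b in zip(ranges, bearings) if 0 < b < np.pi / 2),
                   default=safe)
        right = min((r for r, b in zip(ranges, bearings) if -np.pi / 2 < b < 0),
                    default=safe)
        return 0.05, (0.8 if left > right else -0.8)   # creep forward, turn away
    return 0.3, 0.0

# Example: five range readings (metres) at bearings in radians (0 = straight ahead).
ranges = [1.2, 0.4, 0.3, 0.6, 1.5]
bearings = [-np.pi / 2, -np.pi / 4, 0.0, np.pi / 4, np.pi / 2]
print(avoid_holonomic(ranges, bearings))     # sidesteps while keeping heading
print(avoid_nonholonomic(ranges, bearings))  # slows and turns toward the freer side
```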